Graph Based Reduction of Program Verification Conditions
Increasing the automation of proofs in deductive verification of C programs
is a challenging task. Known heuristics for generating simpler verification
conditions are not efficient enough when applied to industrial C programs,
mainly because of the size of these conditions and their high number of
irrelevant hypotheses. This work presents a strategy to reduce program
verification conditions by selecting their relevant hypotheses. The relevance
of a hypothesis is determined by combining a syntactic analysis with two graph
traversals: the first graph is labeled by constants and the second one by the
predicates occurring in the axioms. The approach is applied to a benchmark
arising in industrial program verification.
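The hypothesis-selection idea can be sketched as a reachability computation on a symbol graph: a hypothesis is kept when its symbols are connected, within a bounded number of steps, to the symbols of the goal. The sketch below is a hypothetical illustration (hypothesis names, the `max_depth` bound, and the single-graph simplification are invented for the example); the actual strategy combines a syntactic analysis with two separately labeled graphs.

```python
from collections import deque

def relevant_hypotheses(goal_symbols, hypotheses, max_depth=2):
    """Select hypotheses whose symbols are reachable from the goal's
    symbols within max_depth steps of a shared-symbol graph.

    hypotheses: dict mapping a hypothesis name to the set of symbols
    (constants, predicates) it mentions.
    """
    # Breadth-first traversal: two symbols are adjacent when some
    # hypothesis mentions both of them.
    reached = set(goal_symbols)
    frontier = deque((s, 0) for s in goal_symbols)
    while frontier:
        sym, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for syms in hypotheses.values():
            if sym in syms:
                for other in syms - reached:
                    reached.add(other)
                    frontier.append((other, depth + 1))
    # A hypothesis is relevant if it shares a symbol with the reached set.
    return {name for name, syms in hypotheses.items() if syms & reached}

hyps = {
    "h1": {"x", "y"},   # mentions the goal symbol x
    "h2": {"y", "z"},   # linked to the goal through h1
    "h3": {"u", "v"},   # unrelated: filtered out
}
print(relevant_hypotheses({"x"}, hyps))  # {'h1', 'h2'}
```

Irrelevant hypotheses such as `h3` never enter the reached set, which is how the verification condition shrinks before being sent to the prover.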
Steganography: a Class of Algorithms having Secure Properties
Chaos-based approaches are frequently proposed in information hiding, but
often without clear justification. Indeed, the reason why chaos is useful for
achieving discretion, robustness, or security is rarely elucidated. This
research work presents a new class of non-blind information hiding algorithms
based on iterations over finite domains that are chaotic according to
Devaney's definition. The approach is entirely formalized, and the reasons for
placing it within the mathematical theory of chaos are explained. Finally,
stego-security and chaos-security are consequently proven for a large class of
algorithms.
Comment: 4 pages, published in Seventh International Conference on Intelligent
Information Hiding and Multimedia Signal Processing, IIH-MSP 2011, Dalian,
China, October 14-16, 2011
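The chaotic iterations behind this class of algorithms update only one cell of a Boolean state per step, the cell being chosen by a strategy (in a hiding context, a sequence derived from the secret key and the message). Below is a minimal sketch of that iteration scheme, using the vectorial negation, a map whose chaotic iterations are known to be chaotic in Devaney's sense; the state, strategy, and step count are invented for the example, and the full stego-secure construction is more involved.

```python
def chaotic_iterations(x, strategy, f):
    """Chaotic iterations: at step n, only the cell selected by the
    strategy is updated with f; all other cells keep their value."""
    x = list(x)
    for i in strategy:
        x[i] = f(x)[i]
        yield tuple(x)

def negation(x):
    # Vectorial negation: flip every Boolean component.
    return [1 - b for b in x]

# Example run on a 3-cell state with an arbitrary strategy [0, 2, 0].
states = list(chaotic_iterations((0, 1, 0), strategy=[0, 2, 0], f=negation))
print(states)  # [(1, 1, 0), (1, 1, 1), (0, 1, 1)]
```

Because only the strategy-selected cell changes at each step, the orbit depends sensitively on the strategy, which is what lets a secret key steer the embedding.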
Steganography: a class of secure and robust algorithms
This research work presents a new class of non-blind information hiding
algorithms that are stego-secure and robust. They are based on iterations over
finite domains that satisfy Devaney's topological chaos property. Thanks to a
complete formalization of the approach, we prove the security of a large class
of steganographic algorithms against watermark-only attacks. Finally, a
complete study of robustness is given in the frequency DWT and DCT domains.
Comment: Published in The Computer Journal special issue about steganography
Application of Steganography for Anonymity through the Internet
In this paper, a novel steganographic scheme based on chaotic iterations is
proposed. This research work is situated within the information hiding
security framework, and applications to anonymity and privacy through the
Internet are also considered. To guarantee such anonymity, it should be
possible to set up a secret communication channel in a web page that is both
secure and robust. To achieve this goal, we propose an information hiding
scheme that is stego-secure, which is the highest level of security in a
well-defined and well-studied category of attacks called "watermark-only
attacks". This category of attacks is the best context in which to study
steganography-based anonymity through the Internet. The steganalysis of our
steganographic process is also studied in order to demonstrate its security in
a real test framework.
Comment: 14 pages
An Easy-to-use and Robust Approach for the Differentially Private De-Identification of Clinical Textual Documents
Unstructured textual data is at the heart of healthcare systems. For obvious
privacy reasons, these documents are not accessible to researchers as long as
they contain personally identifiable information. One way to share this data
while respecting the legislative framework (notably GDPR or HIPAA) is for
medical institutions to de-identify it, i.e. to detect the personal
information of a person with a Named Entity Recognition (NER) system and then
replace it so that associating the document with the person becomes very
difficult. The challenge is to have reliable NER and substitution tools
without compromising confidentiality and consistency within the document. Most
of the research conducted so far focuses on English medical documents and uses
coarse substitutions that do not benefit from advances in privacy. This paper
shows how an efficient and differentially private de-identification approach
can be achieved by strengthening the least robust de-identification step and
by adapting state-of-the-art differentially private mechanisms for
substitution purposes. The result is an approach for de-identifying clinical
documents in French, which is also generalizable to other languages and whose
robustness is mathematically proven.
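The abstract does not detail the substitution mechanism, but a standard way to pick differentially private surrogate values is the exponential mechanism, which samples a replacement with probability proportional to the exponential of its utility. The sketch below is a hypothetical illustration, not the authors' method: the candidate set, the utility function, and the epsilon value are all invented for the example.

```python
import math
import random

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0):
    """Sample a surrogate via the exponential mechanism:
    P(c) is proportional to exp(epsilon * utility(c) / (2 * sensitivity))."""
    weights = [math.exp(epsilon * utility(c) / (2 * sensitivity))
               for c in candidates]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return c
    return candidates[-1]  # guard against floating-point rounding

# Hypothetical example: a NER system detected an age of 37; replace it
# with a nearby age, closer surrogates getting higher utility.
true_age = 37
candidates = list(range(30, 46))
surrogate = exponential_mechanism(candidates,
                                  utility=lambda a: -abs(a - true_age),
                                  epsilon=1.0)
print(surrogate)  # a random age near 37
```

The appeal for de-identification is that the output stays plausible (a nearby age, a date in the same season) while the randomness gives a provable privacy guarantee.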
Gene Similarity-based Approaches for Determining Core-Genes of Chloroplasts
In computational biology and bioinformatics, understanding evolution processes
within various related organisms has received much attention over the last
decades. However, accurate methodologies are still needed to discover the
evolution of gene content. In a previous work, two novel approaches based on
sequence similarities and gene features were proposed. More precisely, we
proposed to use gene names, sequence similarities, or both, obtained either
from NCBI or from the DOGMA annotation tool. DOGMA has the advantage of being
an up-to-date, accurate automatic tool specifically designed for chloroplasts,
whereas NCBI possesses high-quality human-curated genes (together with wrongly
annotated ones). The key idea of the former proposal was to take the best from
these two tools. However, the first proposal was limited by name variations
and spelling errors on the NCBI side, leading to core trees of low quality. In
this paper, these flaws are fixed by improving the comparison of NCBI and
DOGMA results, by relaxing the constraints on gene names, and by adding a
stage of post-validation on gene sequences. Two stages of similarity measures,
on names and on sequences, are thus proposed for sequence clustering, which
improves the results that can be obtained using either NCBI or DOGMA alone.
The results obtained with this quality-control test are further investigated
and compared with previously released ones, on both computational and
biological aspects, considering a set of 99 chloroplast genomes.
Comment: 4 pages, IEEE International Conference on Bioinformatics and
Biomedicine (BIBM 2014)
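The two-stage idea, relaxed name matching followed by post-validation on sequences, can be sketched as a simple clustering pass. This is a hypothetical illustration: the normalization, the similarity measure (Python's `difflib`), the thresholds, and the toy gene data are all invented for the example and are much cruder than the paper's actual pipeline.

```python
from difflib import SequenceMatcher

def core_gene_clusters(genes, name_threshold=0.9, seq_threshold=0.8):
    """Two-stage clustering sketch: group genes whose normalized names
    match, then keep the grouping only if sequence similarity confirms it.

    genes: list of (name, sequence) pairs from annotated genomes.
    """
    def normalize(name):
        # Relax name constraints: ignore case and common separators.
        return name.lower().replace("-", "").replace("_", "")

    def similar(a, b):
        return SequenceMatcher(None, a, b).ratio()

    clusters = []
    for name, seq in genes:
        for cluster in clusters:
            ref_name, ref_seq = cluster[0]
            # Stage 1: relaxed name match; stage 2: sequence validation.
            if (similar(normalize(name), normalize(ref_name)) >= name_threshold
                    and similar(seq, ref_seq) >= seq_threshold):
                cluster.append((name, seq))
                break
        else:
            clusters.append([(name, seq)])
    return clusters

genes = [
    ("rbcL", "ATGTCACCACAAACAGAG"),
    ("RbcL", "ATGTCACCACAAACAGAA"),   # same gene, name-case variant
    ("psbA", "ATGACTGCAATTTTAGAG"),
]
clusters = core_gene_clusters(genes)
print(len(clusters))  # 2
```

The sequence-validation stage is what catches the NCBI-side naming errors described above: two genes with matching names but dissimilar sequences end up in different clusters.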